
Xeon Phi


Intel Axes Nervana AI Chips In Favor Of Habana Labs

#artificialintelligence

Intel said it is ending work on its Nervana neural network processors in favor of the artificial intelligence chips it gained with its recent $2 billion acquisition of Habana Labs. The Santa Clara, Calif.-based company said Friday it has ended development of its Nervana NNP-T training chips and will deliver on current customer commitments for its Nervana NNP-I inference chips, so that it can move forward with Habana Labs' Gaudi and Goya processors in their place. "[The] Habana product line offers the strong, strategic advantage of a unified, highly-programmable architecture for both inference and training," Intel said in a statement provided to CRN. "By moving to a single hardware architecture and software stack for data center AI acceleration, our engineering teams can join forces and focus on delivering more innovation, faster to our customers." Analysts had questioned whether Intel would move forward with Nervana after the chipmaker announced its acquisition of Habana Labs in mid-December. The deal came little more than a month after Intel revealed further details of the Nervana chips in November, parts meant to compete with Nvidia's growing footprint of GPUs in the AI acceleration market.


A Decade of Accelerated Computing Augurs Well For GPUs

#artificialintelligence

While accelerators have been around for some time to boost the performance of simulation and modeling applications, accelerated computing didn't gain traction for most people until Nvidia commercialized its Tesla line of GPUs for general-purpose computing. This year marked the tenth annual Nvidia GPU Technology Conference (GTC), and I have been to all but one, starting with the inaugural event in 2009. Back then it was a much smaller gathering; attendance has leaped 10X, with this year's meeting attracting over 9,000 participants.


Why Intel Is Tweaking Xeon Phi For Deep Learning

#artificialintelligence

If there is anything that chip giant Intel has learned over the past two decades as it has gradually climbed to dominance in datacenter processing, it is, ironically, that one size most definitely does not fit all. As the tight co-design of hardware and software continues in all parts of the IT industry, we can expect fine-grained customization for very precise – and lucrative – workloads, like data analytics and machine learning, to name two of the hottest areas today. Software runs most efficiently on hardware that is tuned for it, although we are used to thinking of that process in mirror image, with programmers tweaking their code to take advantage of the forward-looking features a chip maker conceives of four or five years before they are etched into transistors and delivered as a product. The competition is fierce these days, and Intel has to move fast if it is to keep its compute hegemony in the datacenter. That is why, at the Intel Developer Forum in San Francisco, the company laid out a new path for the Knights family of many-core processors, one that will see it deliver a version of the chip specifically tuned for machine learning workloads.
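
The payoff from that kind of tuning is easy to see in miniature. The Python sketch below times the same matrix multiply written two ways: as an untuned scalar loop, and as a call that dispatches to whatever vendor-tuned BLAS the interpreter was built against (Intel's MKL driving AVX-512 vector units on a Knights part, in the scenario this article describes). This is a minimal illustration, not a benchmark; that the local NumPy build links against MKL is an assumption, and the naive_matmul helper is purely illustrative.

    import time
    import numpy as np

    n = 128
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    def naive_matmul(a, b):
        # Scalar triple loop: no vectorization, no cache blocking.
        out = np.zeros((n, n), dtype=np.float32)
        for i in range(n):
            for j in range(n):
                acc = 0.0
                for k in range(n):
                    acc += a[i, k] * b[k, j]
                out[i, j] = acc
        return out

    t0 = time.perf_counter()
    naive_matmul(a, b)
    t_loop = time.perf_counter() - t0

    t0 = time.perf_counter()
    a @ b  # dispatched to whatever tuned BLAS NumPy was built against
    t_blas = time.perf_counter() - t0

    print(f"scalar loop: {t_loop:.3f}s   tuned BLAS: {t_blas:.5f}s")

On hardware with wide vector units and a BLAS tuned for them, the second timing is typically orders of magnitude smaller, which is exactly the gap that hardware/software co-design exists to close.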


Intel's New Processors: A Machine-learning Perspective - insideBIGDATA

@machinelearnbot

Machine learning and its younger sibling deep learning are continuing to accelerate in terms of increasing the value of enterprise data assets across a variety of problem domains. A recent talk by Dr. Amitai Armon, Chief Data Scientist of Intel's Advanced Analytics department, at the O'Reilly Artificial Intelligence conference in New York on September 27, 2016, focused on the use of Intel's new server processors for various machine learning tasks, as well as considerations in choosing and matching processors to specific machine learning tasks. Intel formed a machine learning task force with a mission to determine how the company can advance the machine learning domain. The vast majority of machine learning code today runs on Intel servers, but the company wanted to do even better for present and future use cases. "We need to understand the needs of these domains and prepare processors for those needs," said Dr. Armon. "This is not a simple challenge, because in machine learning you have many algorithms, many data types, and the field is constantly evolving."


Intel Proclaims Machine Learning Nervana

#artificialintelligence

In a blog post today, Intel (NASDAQ:INTC) CEO Brian Krzanich announced the Nervana Neural Network Processor (NNP). The Intel Nervana NNP promises to revolutionize AI computing across myriad industries. Using Intel Nervana technology, companies will be able to develop entirely new classes of AI applications that maximize the amount of data processed and enable customers to find greater insights – transforming their businesses... We have multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models. This puts us on track to exceed the goal we set last year of achieving 100 times greater AI performance by 2020.


CPUs, GPUs, and Now AI Chips

#artificialintelligence

If you haven't heard about the artificial intelligence (AI) machine-learning (ML) craze that uses deep neural networks (DNNs) and deep learning (DL) to tackle everything from voice recognition to making self-driving cars a reality, then you probably haven't heard about Google's new Tensor Processing Unit (TPU), Intel's Lake Crest, or KnuPath's Hermosa. Those chips come from just a few of the vendors looking to deliver platforms targeting neural networks. The TPU contains a large 8-bit matrix multiply unit (Figure 1). It essentially optimizes the number-crunching required by DNNs; large floating-point number-crunchers need not apply. The TPU is actually a coprocessor managed by a conventional host CPU via the TPU's PCI Express interface.
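
To make the 8-bit point concrete, here is a minimal Python sketch of that style of arithmetic: weights and activations are quantized to signed 8-bit integers, multiplied on the cheap integer path with widened accumulation, and rescaled back to floating point at the end. The per-tensor scaling in the quantize helper is illustrative only, not Google's actual quantization recipe, and the shapes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)  # float weights
    x = rng.normal(size=(8, 3)).astype(np.float32)  # float activations

    def quantize(t):
        # Map floats onto signed 8-bit ints with one per-tensor scale
        # (illustrative; real deployments pick scales more carefully).
        scale = np.abs(t).max() / 127.0
        q = np.clip(np.round(t / scale), -127, 127).astype(np.int8)
        return q, scale

    wq, w_scale = quantize(w)
    xq, x_scale = quantize(x)

    # 8-bit operands, widened accumulation: the cheap integer path that
    # replaces the "large floating-point number-crunchers".
    acc = wq.astype(np.int32) @ xq.astype(np.int32)
    y = acc.astype(np.float32) * (w_scale * x_scale)  # rescale to float

    # Close to the float32 product, at a fraction of the arithmetic cost.
    print(np.max(np.abs(y - w @ x)))

The printed error is small relative to the values involved, which is why inference workloads tolerate 8-bit arithmetic so well: the network's accuracy survives, while each multiply-accumulate costs far less silicon and power than its floating-point equivalent.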


Intel Forms New AI Group Reporting Directly To CEO Brian Krzanich

#artificialintelligence

As we have been writing for a while now, artificial intelligence will transform pretty much everything we do in our lives in the next five years. AI is actually in use today, helping us match faces; identify photos, videos, and the spoken word; do our taxes; improve collaboration; and even assist in healthcare diagnosis – and soon it will help drive our cars and trucks for us. While AI has been around for a while, the big breakthrough was machine learning using deep neural networks, which actually got smarter the more information you threw at them. GPUs, with NVIDIA being the biggest recent beneficiary, have become the current standard for cutting-edge deep neural network training, while inference today is spread across CPUs, GPUs, FPGAs, ASICs and even DSPs. AI is a quick-moving target, and I think it's unwise to assume the engines of today will be static in the future.

